
    Towards Communicating Agents and Avatars in Virtual Worlds

    We report on ongoing research in a virtual reality environment where visitors can interact with agents that help them obtain information, perform certain transactions and collaborate with them to get certain tasks done. In addition, in a multi-user version of the system visitors can chat with each other. Our environment is a laboratory for research and for experiments with users interacting with agents in multimodal ways, referring to visualized information and making use of knowledge possessed by domain agents, but also by agents that represent other visitors of this environment. We discuss standards that are under development for designing such environments. Our environment models a local theatre in our hometown. We discuss our attempts to let this environment evolve into a theatre community where we do not only have goal-directed visitors buying tickets, but also visitors who are not yet sure whether they want to buy or just want information, or visitors who just want to look around, talk with others, etc. We show that we need a multi-user and multi-agent environment to realize our goals, and that we need a unifying framework in order to introduce and maintain different agents and user avatars with different abilities, including intellectual, interaction and animation abilities.
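    As an illustration only, the unifying framework mentioned above could be sketched as a small Python class in which every agent or avatar declares the abilities it supports; the class and ability names below are hypothetical and not taken from the paper.

        # Hypothetical sketch of a unifying agent/avatar framework: each inhabitant of the
        # virtual environment registers the abilities it supports, so agents and avatars
        # with different intellectual, interaction and animation abilities are handled uniformly.
        from dataclasses import dataclass, field
        from typing import Callable, Dict

        @dataclass
        class Inhabitant:
            name: str
            abilities: Dict[str, Callable[[str], str]] = field(default_factory=dict)

            def can(self, ability: str) -> bool:
                return ability in self.abilities

            def perform(self, ability: str, request: str) -> str:
                if not self.can(ability):
                    return f"{self.name} has no '{ability}' ability."
                return self.abilities[ability](request)

        # A domain agent that answers questions and a visitor avatar that only animates gestures.
        info_agent = Inhabitant("information agent", {"dialogue": lambda q: f"Answering: {q}"})
        visitor_avatar = Inhabitant("visitor avatar", {"animate": lambda g: f"Playing gesture: {g}"})

        print(info_agent.perform("dialogue", "Which performances run tonight?"))
        print(visitor_avatar.perform("dialogue", "Which performances run tonight?"))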

    Bundeling van conferentieverslagen (Bundling conference proceedings)

    This article describes how proceedings for a workshop or conference can be produced with PdfLaTeX and the packages pdfpages, fancyhdr and hyperref.
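    As a minimal sketch of the approach described above (the file names, headers and bookmark titles are placeholders, not taken from the article), a master document can include the individual contributions with pdfpages, add running headers with fancyhdr, and generate PDF bookmarks and links with hyperref:

        % Minimal proceedings wrapper; paper1.pdf and paper2.pdf are placeholder file names.
        \documentclass[a4paper]{article}
        \usepackage{pdfpages}                 % include each contribution as full PDF pages
        \usepackage{fancyhdr}                 % running headers and footers across the bundle
        \usepackage[bookmarks=true]{hyperref} % PDF bookmarks and clickable links

        \pagestyle{fancy}
        \fancyhf{}
        \fancyhead[L]{Workshop Proceedings}
        \fancyfoot[C]{\thepage}

        \begin{document}
        % pagecommand keeps the running header on every included page;
        % addtotoc adds a bookmark entry for each contribution.
        \includepdf[pages=-, pagecommand={\thispagestyle{fancy}},
                    addtotoc={1,section,1,First Contribution,sec:paper1}]{paper1.pdf}
        \includepdf[pages=-, pagecommand={\thispagestyle{fancy}},
                    addtotoc={1,section,1,Second Contribution,sec:paper2}]{paper2.pdf}
        \end{document}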

    TwNC: a Multifaceted Dutch News Corpus

    This contribution describes the Twente News Corpus (TwNC), a multifaceted corpus for Dutch that is being deployed in a number of NLP research projects, including tracks within the Dutch national research programme MultimediaN, the NWO programme CATCH, and the Dutch-Flemish programme STEVIN. The development of the corpus started in 1998 within the predecessor project DRUID; the corpus currently has a size of 530M words. The text part has been built from four different sources: Dutch national newspapers, television subtitles, teleprompter (auto-cue) files, and both manually and automatically generated broadcast news transcripts along with the broadcast news audio. TwNC plays a crucial role in the development and evaluation of a wide range of tools and applications for the domain of multimedia indexing, such as large-vocabulary speech recognition, cross-media indexing and cross-language information retrieval. Part of the corpus was fed into the Dutch written text corpus in the context of the Dutch-Belgian STEVIN project D-COI, which was completed in 2007. The sections below describe the rationale that was the starting point for the corpus development, outline the cross-media linking approach adopted within MultimediaN, and finally provide some facts and figures about the corpus.

    Preface

    These are the proceedings of the 3rd International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN 09). The first edition of this conference, organised in Madonna di Campiglio, saw the gathering of a diverse audience with broad and varied interests. With presentations on topics ranging from underlying technology to intelligent interaction and entertainment applications, several inspiring invited lectures, a demonstration session and a hands-on design garage, that first edition of INTETAIN generated a lot of interaction between participants in a lively atmosphere. We hope that we have managed to continue this direction with the third edition, which will take place in Amsterdam, following the second edition held in Cancun. The submissions for short and long papers this year show a certain focus on topics such as emergent games, exertion interfaces and embodied interaction, but also cover important topics of the previous editions, such as affective user interfaces, storytelling, sensors, tele-presence in entertainment, animation, edutainment, and (interactive) art. The presentation of the accepted papers, together with the many interactive demonstrations of entertainment and art installations, and other participative activities to be held during the conference, should go some way towards recreating the open and interactive atmosphere that has been the goal of INTETAIN since its beginning. In addition to the aforementioned papers and demonstrations, we are happy to present contributions from three excellent invited speakers for INTETAIN 09. Matthias Rauterberg of Eindhoven University, in his contribution titled “Entertainment Computing, Social Transformation and the Quantum Field”, takes a broad view as he discusses positive aspects of entertainment computing regarding its capacity for social transformation. Michael Mateas, of the University of California, Santa Cruz, talks about his work in interactive art and storytelling. Antonio Camurri, of InfoMus Lab, Genova, discusses an approach to Human Music Interaction that assigns a more active role to users listening to and interacting with music, in his contribution titled “Non-verbal full body emotional and social interaction: a case study on multimedia systems for active music listening”.

    Dialogues with a talking face for web-based services and transactions

    In this paper we discuss our research on interactions in a virtual theatre that has been built using VRML and can therefore be accessed through Web pages. In the virtual environment we employ several agents. The virtual theatre allows navigation input through keyboard and mouse, but there is also a navigation agent that listens to typed input and spoken commands. Feedback from the system is given using speech synthesis. We also have an information agent that allows a natural language dialogue with the system, where the input is keyboard-driven and the output is presented both in tables and through template-driven natural language generation. Several talking faces for the different agents in the virtual world are under development. At this moment, an avatar with a cartoon-like talking face driven by a text-to-speech synthesizer can provide users with information about performances in the theatre.
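    As an illustration only of the template-driven generation mentioned above (the intents, templates and slot names are hypothetical, not taken from the paper), the information agent's natural language output can be produced by filling performance attributes into canned sentence patterns:

        # Hypothetical sketch of template-driven NLG for performance information;
        # the templates, intents and slot names are illustrative, not taken from the paper.
        TEMPLATES = {
            "when":  "{title} is performed on {date} at {time} in the {hall}.",
            "price": "Tickets for {title} cost {price} euros.",
        }

        def generate(intent: str, slots: dict) -> str:
            """Fill the template for the recognised intent with the retrieved slot values."""
            template = TEMPLATES.get(intent)
            if template is None:
                return "Sorry, I cannot answer that question."
            return template.format(**slots)

        performance = {"title": "Twelfth Night", "date": "12 March",
                       "time": "20:00", "hall": "great hall", "price": "17.50"}
        print(generate("when", performance))   # Twelfth Night is performed on 12 March at 20:00 in the great hall.
        print(generate("price", performance))  # Tickets for Twelfth Night cost 17.50 euros.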

    A multimodal interaction system for navigation

    To help users find their way in a virtual theatre we developed a navigation agent. In a natural language dialogue, the agent assists users looking for the location of an object or room, and it shows routes between locations. The speech-based dialogue system allows users to ask questions such as “Where is the coffee bar?” and “How do I get to the great hall?” The agent has a map and can mark locations and routes; users can click on locations and ask questions about them.
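    A minimal sketch of how such a question-answering loop could work (the location names, map coordinates and pattern matching below are hypothetical, not taken from the paper):

        # Hypothetical sketch of a map-based navigation agent; the locations, coordinates
        # and simple pattern matching are illustrative, not taken from the paper.
        import re

        LOCATIONS = {                      # location name -> (x, y) position on the theatre map
            "coffee bar": (12, 4),
            "great hall": (30, 18),
            "box office": (5, 2),
        }

        def answer(question: str) -> str:
            q = question.lower()
            where = re.search(r"where is the (.+?)\?", q)
            route = re.search(r"how do i get to the (.+?)\?", q)
            if where and where.group(1) in LOCATIONS:
                x, y = LOCATIONS[where.group(1)]
                return f"The {where.group(1)} is marked on the map at position ({x}, {y})."
            if route and route.group(1) in LOCATIONS:
                return f"I have drawn the route from your position to the {route.group(1)} on the map."
            return "I don't know that location; you can also click on the map."

        print(answer("Where is the coffee bar?"))
        print(answer("How do I get to the great hall?"))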

    Browsing and Searching the Spoken Words of Buchenwald Survivors

    The ‘Buchenwald’ project is the successor of the ‘Radio Oranje’ project, which aimed at the transformation of a set of World War II related mono-media documents (speeches of the Dutch Queen Wilhelmina, textual transcripts of the speeches, and a database of WWII-related photographs) into an attractive online multimedia presentation of the Queen’s speeches with keyword search functionality [6, 3]. The ‘Buchenwald’ project links up with and extends the ‘Radio Oranje’ approach. The goal of the project was to develop a Dutch multimedia information portal on the World War II concentration camp Buchenwald. The portal holds both textual information sources and a video collection of testimonies from 38 Dutch camp survivors, with durations between half an hour and two and a half hours. For each interview, an elaborate description, a speaker profile and a short summary are available.
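    As an illustration of the kind of keyword search functionality mentioned above (the data layout, example segments and function names are hypothetical, not taken from the paper), time-aligned transcript segments can be indexed so that a keyword query returns the interviews and offsets at which to start video playback:

        # Hypothetical sketch of keyword search over time-aligned interview transcripts;
        # the data layout and example segments are illustrative, not taken from the paper.
        from collections import defaultdict

        # (interview_id, start_time_in_seconds, transcript_segment)
        SEGMENTS = [
            ("interview_01", 135.0, "we arrived at the camp in the winter of 1944"),
            ("interview_01", 912.5, "after the liberation we walked to Weimar"),
            ("interview_02",  48.2, "the barracks in the camp were overcrowded"),
        ]

        def build_index(segments):
            """Map every word to the (interview, start time) positions where it is spoken."""
            index = defaultdict(list)
            for interview, start, text in segments:
                for word in text.lower().split():
                    index[word].append((interview, start))
            return index

        def search(index, keyword):
            """Return playback positions for a keyword, ready to seed the video player."""
            return index.get(keyword.lower(), [])

        index = build_index(SEGMENTS)
        print(search(index, "camp"))   # [('interview_01', 135.0), ('interview_02', 48.2)]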